Higher Education


I Struggled to Find a Job After College. To Pay Rent, I Started Doing Something Highly Controversial.

Slate

I Have a Warning for Everyone. Consider this my open admission. When I graduated from UC-Berkeley with my "useless" comparative literature degree, into one of the bleakest job markets in recent American memory, I had to get creative. That was what brought me to marketing myself as an "academic editor" and an "admissions essay advisor" on various freelancing websites last fall. I figured I had done my fair share of editing for friends throughout the years, and I needed another gig to supplement my inconsistent substitute-teaching paychecks.


Counterfactual Fairness

Neural Information Processing Systems

Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
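The intuition above can be sketched in a few lines: hold the latent background variables of a toy structural causal model fixed, intervene on the protected attribute, and check whether the prediction changes. This is a minimal illustration, not the paper's actual framework; the model, function names (`simulate_score`, `counterfactually_fair`), and numbers are all invented for the example.

```python
# Toy structural causal model: a latent "ability" variable drives the
# outcome; the protected attribute may or may not enter the predictor.

def simulate_score(ability, group):
    # A biased predictor: it depends directly on the protected attribute.
    return 0.8 * ability + (5.0 if group == "A" else 0.0)

def fair_predict(ability, group):
    # A predictor that ignores the protected attribute entirely.
    return 0.8 * ability

def counterfactually_fair(predict, ability, groups=("A", "B"), tol=1e-9):
    # Fair if the prediction is unchanged when we intervene on the
    # protected attribute while holding the latent background fixed.
    scores = [predict(ability, g) for g in groups]
    return max(scores) - min(scores) < tol

print(counterfactually_fair(simulate_score, ability=70.0))  # False
print(counterfactually_fair(fair_predict, ability=70.0))    # True
```

In the paper's terms, the check compares the prediction in the actual world against the counterfactual world obtained by changing the demographic attribute while keeping the exogenous variables fixed.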


The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself

AIHub

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom. Universities are adopting AI across many areas of institutional life.


'I wish I could push ChatGPT off a cliff': professors scramble to save critical thinking in an age of AI

The Guardian

Lea Pao, a professor of literature at Stanford University, has been experimenting with ways to get her students to learn offline. She has them memorize poems, perform at recitation events, look at art in the real world. It's an effort to reconnect them to the bodily experience of learning, she said, and to keep them from turning to artificial intelligence to do the work for them. "There's no AI-proof anything," Pao said. "Rather than policing it, I hope that their overall experiences in this class will show them that there's a way out."


On the Limitations of Fractal Dimension as a Measure of Generalization

Charlie B. Tan (University of Oxford), Inés García-Redondo (Imperial College London), Qiquan Wang

Neural Information Processing Systems

Bounding and predicting the generalization gap of overparameterized neural networks remains a central open problem in theoretical machine learning. There is a recent and growing body of literature that proposes the framework of fractals to model optimization trajectories of neural networks, motivating generalization bounds and measures based on the fractal dimension of the trajectory. Notably, the persistent homology dimension has been proposed to correlate with the generalization gap.
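To make "fractal dimension of a trajectory" concrete, here is a minimal box-counting dimension estimate for a point cloud. This is a generic sketch of the classical box-counting method, not the persistent homology dimension the paper studies; the function name and the synthetic straight-line trajectory are illustrative only.

```python
import numpy as np

def box_counting_dimension(points, scales):
    # Count occupied grid cells at each scale eps, then fit the slope of
    # log N(eps) against log(1/eps); the slope estimates the dimension.
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    counts = []
    for eps in scales:
        cells = np.floor((points - mins) / eps).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# A straight line in 2D should come out with dimension close to 1.
t = np.linspace(0.0, 1.0, 5000)
line = np.stack([t, t], axis=1)
print(box_counting_dimension(line, scales=[0.1, 0.05, 0.02, 0.01]))
```

A fractal set such as an optimization trajectory with heavy-tailed steps would yield a non-integer slope, which is the quantity such generalization measures try to relate to the generalization gap.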